Integrating Multimodal Cues Using Grammar Based Models

Authors

  • Manuel Giuliani
  • Alois Knoll
Abstract

Multimodal systems must process several input streams efficiently and represent the input in a way that allows connections to be established between modalities. This paper describes a multimodal system that uses Combinatory Categorial Grammars to parse several input streams and translate them into logical formulas. These formulas are expressed in Hybrid Logic, which is well suited to multimodal integration because it can represent temporal relationships between modalities in an abstract way; this level of abstraction makes it straightforward to define rules for multimodal processing.
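
The sketch below illustrates the kind of representation the abstract describes: each modality contributes a hybrid-logic-style formula anchored at a nominal, together with the time interval in which it was observed, and a fusion rule uses temporal overlap to bind an underspecified speech referent to a pointed-at object. All names here (HybridFormula, fuse_deictic, the ?x placeholder) are illustrative assumptions, not the authors' implementation.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class HybridFormula:
        nominal: str        # the nominal naming the state/event, e.g. "e1"
        proposition: str    # propositional content, e.g. "take(addressee, ?x)"
        start: float        # begin of the observation interval (seconds)
        end: float          # end of the observation interval (seconds)

    def overlaps(a: HybridFormula, b: HybridFormula) -> bool:
        """True if the observation intervals of the two formulas overlap."""
        return a.start < b.end and b.start < a.end

    def fuse_deictic(speech: HybridFormula,
                     gesture: HybridFormula) -> Optional[HybridFormula]:
        """Toy fusion rule: bind the underspecified referent ?x in the speech
        formula to the object denoted by a pointing gesture, provided the two
        observations overlap in time."""
        if "?x" in speech.proposition and overlaps(speech, gesture):
            resolved = speech.proposition.replace("?x", gesture.proposition)
            return HybridFormula(speech.nominal, resolved,
                                 min(speech.start, gesture.start),
                                 max(speech.end, gesture.end))
        return None

    # "Take this" spoken while pointing at cube_3, in overlapping intervals.
    speech = HybridFormula("e1", "take(addressee, ?x)", 0.2, 1.1)
    gesture = HybridFormula("g1", "cube_3", 0.6, 1.4)
    print(fuse_deictic(speech, gesture))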

Similar articles

Modeling different decision strategies in a time tabled multimodal route planning by integrating the quantifier-guided OWA operators, fuzzy AHP weighting method and TOPSIS

The purpose of Multi-modal Multi-criteria Personalized Route Planning (MMPRP) is to provide an optimal route between an origin-destination pair by weighting the effective criteria, such that the route can be a combination of public and private modes of transportation. In this paper, the fuzzy analytical hierarchy process (fuzzy AHP) and the quantifier-guided ordered weighted averaging (...
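
The following minimal sketch (not the authors' code) shows quantifier-guided OWA aggregation, one of the building blocks named in the abstract: weights are derived from a RIM quantifier Q(r) = r**alpha and applied to the ordered criterion scores of a candidate route.

    def owa(scores, alpha=2.0):
        """Quantifier-guided OWA: weights come from the RIM quantifier
        Q(r) = r**alpha and are applied to the scores sorted in descending
        order.  alpha < 1 behaves 'or-like' (optimistic), alpha > 1
        'and-like' (pessimistic)."""
        n = len(scores)
        # w_i = Q(i/n) - Q((i-1)/n); the weights sum to 1 by construction.
        weights = [(i / n) ** alpha - ((i - 1) / n) ** alpha
                   for i in range(1, n + 1)]
        ordered = sorted(scores, reverse=True)
        return sum(w * s for w, s in zip(weights, ordered))

    # Criterion satisfaction scores of one candidate route (e.g. travel time,
    # cost, number of transfers), normalised to [0, 1].
    print(owa([0.9, 0.6, 0.3], alpha=2.0))   # pessimistic aggregation
    print(owa([0.9, 0.6, 0.3], alpha=0.5))   # optimistic aggregation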

Reference Resolution as a facilitating process towards robust Multimodal Dialogue Management: A Cognitive Grammar Approach

This paper tries to fit a novel reference resolution mechanism into a multimodal dialogue system framework. Essentially, our aim is to show that a typical multimodal dialogue system can actually benefit from the cognitive grammar approach that we adopt for reference resolution. The central idea is to construct and update reference and context models in a manner that imparts adequate level of un...

Towards Natural Human-Robot Interaction using Multimodal Cues

Robust human-robot interaction in dynamic domains requires that the robot autonomously learn from sensory cues and adapt to unforeseen changes. However, the uncertainty associated with sensing and actuation on mobile robots makes autonomous operation a formidable challenge. This paper describes a novel framework for robots to incrementally learn object models and categorize objects based on mul...

Towards Robust Human-Robot Interaction using Multimodal Cues

Real-world domains characterized by partial observability and nondeterminism frequently make it difficult for a robot to operate without any human feedback. However, human participants are unlikely to have the time and expertise to provide elaborate and accurate feedback. The deployment of mobile robots to interact with humans in dynamic domains hence requires that the robot learn from multimod...

Understanding Multimodal Interaction by Exploiting Unification and Integration Rules

This paper presents a model for synergistic integration of multimodal speech and pen information. The model consists of an algorithm for matching and integrating interpretations of inputs from different modalities, together with a grammar that constrains integration. Integration proper is achieved by unifying feature structures. The integrator is part of a general framework for multimodal infor...
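
The sketch below (an illustrative assumption, not the paper's implementation) shows the core mechanism the abstract names: combining a speech interpretation and a pen interpretation by unifying feature structures, here modelled as nested dictionaries.

    def unify(fs1, fs2):
        """Return the unification of two feature structures (nested dicts),
        or None if they contain conflicting values."""
        if isinstance(fs1, dict) and isinstance(fs2, dict):
            result = dict(fs1)
            for key, value in fs2.items():
                if key in result:
                    merged = unify(result[key], value)
                    if merged is None:
                        return None          # clash: unification fails
                    result[key] = merged
                else:
                    result[key] = value      # feature present in only one structure
            return result
        return fs1 if fs1 == fs2 else None   # atomic values must match exactly

    # Speech: "move this unit there" -- object type known, location unspecified.
    speech = {"cmd": "move", "object": {"type": "unit"}}
    # Pen: a tap and a second point -- identifies the object and the destination.
    pen = {"object": {"id": "unit_17"}, "destination": {"x": 42, "y": 7}}

    print(unify(speech, pen))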

Journal:

Volume   Issue 

Pages  -

Publication date: 2007